
    On Some Integrated Approaches to Inference

    We present arguments for the formulation of a unified approach to different standard methods of continuous inference from partial information. It is claimed that an explicit partition of information into a priori information (prior knowledge) and a posteriori information (data) is an important way of standardizing inference approaches, so that they can be compared on a normative scale and so that notions of optimal algorithms become farther-reaching. The inference methods considered include neural network approaches, information-based complexity, and Monte Carlo, spline, and regularization methods. The model is an extension of currently used continuous complexity models, with a class of algorithms in the form of optimization methods in which an optimization functional (involving the data) is minimized. This extends the family of current approaches in continuous complexity theory, which include the use of interpolatory algorithms in worst and average case settings.
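
    The optimization-functional view described in this abstract can be pictured with classical Tikhonov regularization, where the data enter through a least-squares misfit term and prior knowledge enters through a penalty. A minimal sketch, assuming a hypothetical forward operator A, data y, and regularization weight lam (none of which come from the paper):

```python
import numpy as np

def tikhonov(A, y, lam):
    """Minimize ||A x - y||^2 + lam * ||x||^2 over x.

    The minimizer solves the normal equations (A^T A + lam I) x = A^T y.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy usage: recover a smooth signal from noisy indirect observations.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))               # hypothetical forward operator
x_true = np.sin(np.linspace(0, np.pi, 20))      # hypothetical signal
y = A @ x_true + 0.1 * rng.standard_normal(50)  # noisy data (a posteriori information)
x_hat = tikhonov(A, y, lam=1.0)                 # prior smoothness enters via lam
```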

    Information-Based Nonlinear Approximation: An Average Case Setting

    Nonlinear approximation has usually been studied under deterministic assumptions and with complete information about the underlying functions. We assume only partial information, and we are interested in the average case error and complexity of approximation. It turns out that the problem can essentially be split into two independent problems: average case nonlinear (restricted) approximation from complete information, and average case unrestricted approximation from partial information. The results are then applied to average case piecewise polynomial approximation and to average case approximation of real sequences.
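
    As an illustration of approximation from partial information, the sketch below builds a piecewise linear approximant from n point values only; the target function, grid, and error measure are hypothetical stand-ins, not the paper's average case setting:

```python
import numpy as np

def piecewise_linear_approx(f, n, a=0.0, b=1.0):
    """Approximate f on [a, b] using only n point values (partial information).

    Returns a callable piecewise linear interpolant.
    """
    xs = np.linspace(a, b, n)
    ys = f(xs)                       # the only information used about f
    return lambda t: np.interp(t, xs, ys)

# Toy error estimate on a dense grid (a stand-in for the average case error).
f = lambda t: np.sin(2 * np.pi * t)             # hypothetical target function
approx = piecewise_linear_approx(f, n=16)
grid = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(f(grid) - approx(grid)))    # worst error over the grid
```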

    Worst case tractability of linear problems in the presence of noise: linear information

    We study the worst case tractability of multivariate linear problems defined on separable Hilbert spaces. Information about a problem instance consists of noisy evaluations of arbitrary bounded linear functionals, where the noise is either deterministic or random. The cost of a single evaluation depends on its precision and is controlled by a cost function. We establish the mutual interactions between the tractability of a problem with noisy information, the cost function, and the tractability of the same problem with exact information.
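
    One way to picture this setting: each evaluation of a bounded linear functional returns the exact value perturbed by noise of a chosen precision, and the charge grows as the precision improves. The cost function and functional below are hypothetical examples, not the ones from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_evaluation(L, f, sigma):
    """Return L(f) corrupted by Gaussian noise of standard deviation sigma."""
    return L(f) + sigma * rng.standard_normal()

def cost(sigma):
    """Hypothetical cost function: higher precision (smaller sigma) costs more."""
    return 1.0 + 1.0 / sigma**2

# Example: a bounded linear functional given by an inner product.
a = np.array([1.0, 0.5, 0.25])
L = lambda f: float(a @ f)              # f -> <f, a>
f = np.array([0.3, -0.1, 0.7])
value = noisy_evaluation(L, f, sigma=0.01)
charge = cost(0.01)                     # price paid for that precision
```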

    09391 Abstracts Collection -- Algorithms and Complexity for Continuous Problems

    From 20 September to 25 September 2009, Dagstuhl Seminar 09391, "Algorithms and Complexity for Continuous Problems", was held at the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    How to Benefit from Noise

    We compare nonadaptive and adaptive designs for estimating linear functionals in the (minimax) statistical setting. It is known that adaptive designs are no better in the worst case setting for convex and symmetric classes, nor in the average case setting with Gaussian distributions.
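
    For context, a nonadaptive design fixes all observation points before any data are seen. The sketch below estimates the mean functional of f over [0, 1] from n noisy values at equispaced points; the instance, noise level, and estimator are hypothetical choices in the spirit of the minimax statistical setting:

```python
import numpy as np

rng = np.random.default_rng(2)

def nonadaptive_estimate(f, n, sigma):
    """Estimate I(f) = integral of f over [0, 1] from n noisy samples.

    The design (equispaced midpoints) is chosen in advance: nonadaptive.
    """
    xs = (np.arange(n) + 0.5) / n                   # fixed design points
    ys = f(xs) + sigma * rng.standard_normal(n)     # noisy observations
    return ys.mean()                                # midpoint-rule estimator

f = lambda t: np.exp(-t)                            # hypothetical instance
estimate = nonadaptive_estimate(f, n=100, sigma=0.1)
```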

    Multivariate Lp approximation of Hölder classes in the presence of Gaussian noise

    Author affiliation: University of Warsaw

    Complexity of Neural Network Approximation with Limited Information: a Worst Case Approach

    In neural network theory the complexity of constructing networks to approximate input-output (i-o) functions has been of recent interest. We study such complexity in the somewhat more general context of approximating elements f of a normed space F. We assume, as is standard for radial basis function (RBF) networks, that available approximations of f, as well as information about f, are limited. That is, the approximation of f is constructed as a linear combination of a limited collection of basis elements (neuron activation functions), and the construction uses only values of some functionals at f (e.g., examples or point values of f). Such situations are typical in RBF network models, where one wants to build a network that approximates a multivariate i-o function f. We show that the complexity can be essentially split into two independent parts related to information ε-complexity and neural ε-complexity. Our analysis is done in the worst case setting, and integrates elements of i…
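
    A minimal sketch of the RBF setting described above: the approximation of f is a linear combination of a fixed, limited set of Gaussian neurons, and its coefficients are computed from finitely many point values of f only. The centers, width, sample count, and target function are hypothetical choices:

```python
import numpy as np

def fit_rbf(xs, ys, centers, width):
    """Fit coefficients c so that sum_j c_j * phi(|x - t_j|) matches the data.

    Uses least squares on the limited information (xs, ys) about f.
    """
    Phi = np.exp(-((xs[:, None] - centers[None, :]) / width) ** 2)
    c, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
    return lambda x: np.exp(
        -((np.atleast_1d(x)[:, None] - centers[None, :]) / width) ** 2
    ) @ c

# Limited information about f: 20 point values on [0, 1].
f = lambda t: np.cos(3 * t)                        # hypothetical i-o function
xs = np.linspace(0.0, 1.0, 20)
net = fit_rbf(xs, f(xs), centers=np.linspace(0, 1, 8), width=0.2)
residual = np.max(np.abs(net(xs) - f(xs)))         # fit quality on the samples
```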

    Guest Editors’ preface
